
    Trust and distrust in contradictory information transmission

    We analyse the problem of contradictory information distribution in networks of agents with positive and negative trust. The networks of interest are built from ranked agents with different epistemic attitudes. In this context, positive trust is a property of the communication between agents required when message passing is executed bottom-up in the hierarchy, or when a sceptic agent checks information. These two situations are associated with a confirmation procedure that has an epistemic cost. Negative trust results from refusing verification, either of contradictory information or because of a lazy attitude. We first offer a natural deduction system called SecureNDsim to model these interactions and consider some meta-theoretical properties of its derivations. We then implement it in a NetLogo simulation to test its formal properties experimentally. Our analysis concerns in particular: conditions for consensus-reaching transmissions; epistemic costs induced by confirmation and rejection operations; the influence of the ranking of the initially labelled nodes on consensus and costs; and complexity results.
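
    A minimal sketch of the dynamics the abstract describes, not the authors' SecureNDsim calculus or their NetLogo model: ranked agents pass a message up a chain; confirmation (bottom-up passing or a sceptic agent checking) incurs an epistemic cost, while lazy agents accept without verification (negative trust). All names, attitudes, and numeric parameters below are illustrative assumptions.

        import random

        CONFIRM_COST = 1.0  # assumed epistemic cost of one confirmation

        class Agent:
            def __init__(self, rank, attitude):
                self.rank = rank          # position in the hierarchy
                self.attitude = attitude  # "sceptic" or "lazy"
                self.belief = None

        def transmit(sender, receiver, message, costs):
            # Positive trust: bottom-up passing or a sceptic receiver triggers
            # a costly confirmation; otherwise uptake is unchecked (negative trust).
            verify = receiver.attitude == "sceptic" or receiver.rank > sender.rank
            if verify:
                costs["epistemic"] += CONFIRM_COST
            receiver.belief = message

        agents = [Agent(rank=i, attitude=random.choice(["sceptic", "lazy"]))
                  for i in range(10)]
        agents[0].belief = True  # initially labelled node
        costs = {"epistemic": 0.0}
        for a, b in zip(agents, agents[1:]):
            transmit(a, b, message=True, costs=costs)

        print("consensus:", all(x.belief for x in agents),
              "epistemic cost:", costs["epistemic"])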

    Inference of development activities from interaction with uninstrumented applications

    Studying developers’ behavior in software development tasks is crucial for designing effective techniques and tools to support developers’ daily work. In modern software development, developers frequently use different applications including IDEs, Web Browsers, documentation software (such as Office Word, Excel, and PDF applications), and other tools to complete their tasks. This creates significant challenges in collecting and analyzing developers’ behavior data. Researchers usually instrument the software tools to log developers’ behavior for further studies. This is feasible for studies on development activities using specific software tools. However, instrumenting all software tools commonly used in real work settings is difficult and requires significant human effort. Furthermore, the collected behavior data consist of low-level and fine-grained event sequences, which must be abstracted into high-level development activities for further analysis. This abstraction is often performed manually or based on simple heuristics. In this paper, we propose an approach to address the above two challenges in collecting and analyzing developers’ behavior data. First, we use our ActivitySpace framework to improve the generalizability of the data collection. ActivitySpace uses operating-system-level instrumentation to track developer interactions with a wide range of applications in real work settings. Second, we use a machine learning approach to reduce the human effort needed to abstract low-level behavior data. Specifically, considering the sequential nature of the interaction data, we propose a Conditional Random Field (CRF) based approach to segment and label the developers’ low-level actions into a set of basic, yet meaningful development activities. To validate the generalizability of the proposed data collection approach, we deploy the ActivitySpace framework in an industry partner’s company and collect real working data from ten professional developers during one week of work on three actual software projects. The experiment with the collected data confirms that, with initial human-labeled training data, the CRF model can be trained to infer development activities from low-level actions with reasonable accuracy within and across developers and software projects. This suggests that the machine learning approach is promising in reducing the human effort required for behavior data analysis. This work was partially supported by the NSFC Program (Nos. 61602403 and 61572426).
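
    A hedged sketch of the CRF idea using sklearn-crfsuite (the paper does not name this library): each low-level interaction event becomes a feature dictionary, and the CRF labels the whole sequence with development activities. The event fields, features, and activity labels below are invented placeholders, not the authors' feature set.

        import sklearn_crfsuite

        def event_features(seq, i):
            # Features for one low-level event, including the preceding app
            # to exploit the sequential structure.
            ev = seq[i]
            feats = {"app": ev["app"], "action": ev["action"]}
            if i > 0:
                feats["prev_app"] = seq[i - 1]["app"]
            return feats

        # One toy session: (application, action) events with activity labels.
        session = [{"app": "IDE", "action": "edit"},
                   {"app": "IDE", "action": "run"},
                   {"app": "Browser", "action": "search"}]
        labels = ["coding", "testing", "info-seeking"]

        X = [[event_features(session, i) for i in range(len(session))]]
        y = [labels]

        crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
        crf.fit(X, y)                # train on human-labeled sequences
        print(crf.predict(X))       # infer activities for new (here: the same) sessions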

    Transparency and Trust in Human-AI-Interaction: The Role of Model-Agnostic Explanations in Computer Vision-Based Decision Support

    Computer Vision, and hence Artificial Intelligence-based extraction of information from images, has received increasing attention in recent years, for instance in medical diagnostics. While the complexity of the algorithms is a reason for their increased performance, it also leads to the "black box" problem, consequently decreasing trust towards AI. In this regard, "Explainable Artificial Intelligence" (XAI) makes it possible to open that black box and to improve the degree of AI transparency. In this paper, we first discuss the theoretical impact of explainability on trust towards AI, and then showcase what the use of XAI in a health-related setting can look like. More specifically, we show how XAI can be applied to understand why Computer Vision, based on deep learning, did or did not detect a disease (malaria) in image data (thin blood smear slide images). Furthermore, we investigate how XAI can be used to compare the detection strategies of two different deep learning models often used for Computer Vision: a Convolutional Neural Network and a Multi-Layer Perceptron. Our empirical results show that i) the AI sometimes used questionable or irrelevant data features of an image to detect malaria (even when the prediction was correct), and ii) that there may be significant discrepancies in how different deep learning models explain the same prediction. Our theoretical discussion highlights that XAI can support trust in Computer Vision systems, and AI systems in general, especially through increased understandability and predictability.
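
    A sketch of a model-agnostic image explanation with LIME; the abstract names model-agnostic explanations but not a specific library, so LIME is an assumed choice here. The random image and the random classifier below are runnable stand-ins for a real blood smear slide and a trained CNN or MLP.

        import numpy as np
        from lime import lime_image
        from skimage.segmentation import mark_boundaries

        rng = np.random.default_rng(0)
        blood_smear_image = rng.random((64, 64, 3))  # stand-in for a slide image

        def predict_fn(images):
            # Stand-in for the trained CNN/MLP: maps a batch of images to
            # per-class probabilities [parasitized, uninfected].
            p = rng.random((len(np.asarray(images)), 1))
            return np.hstack([p, 1 - p])

        explainer = lime_image.LimeImageExplainer()
        explanation = explainer.explain_instance(
            blood_smear_image, predict_fn,
            top_labels=2,        # e.g. parasitized vs. uninfected
            num_samples=200)     # perturbed samples fitting the local surrogate

        # Highlight the superpixels that pushed the model toward its top prediction;
        # comparing these regions across models exposes differing detection strategies.
        img, mask = explanation.get_image_and_mask(
            explanation.top_labels[0], positive_only=True, num_features=5)
        overlay = mark_boundaries(img, mask)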

    Modelling trust in artificial agents, a first step toward the analysis of e-trust

    This paper provides a new analysis of e-trust, trust occurring in digital contexts, among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis of this phenomenon. The analysis first focuses on an agent’s trustworthiness, which is presented as the necessary requirement for e-trust to occur. Then, a new definition of e-trust as a second-order property of first-order relations is presented. It is shown that the second-order property of e-trust has the effect of minimising an agent’s effort and commitment in the achievement of a given goal. On this basis, a method is provided for the objective assessment of the levels of e-trust occurring among the artificial agents of a distributed artificial system.
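
    A toy sketch of the kind of objective assessment the paper argues for, not its actual method: trustworthiness is estimated from a trustee's track record in first-order relations, and the second-order effect of e-trust shows up as supervision effort the truster can drop. The ratio-based proxy and the threshold are assumptions for illustration.

        def trustworthiness(successes: int, trials: int) -> float:
            # Estimated probability that the trustee achieves the delegated goal.
            return successes / trials if trials else 0.0

        def delegate_effort(base_effort: float, trust_level: float) -> float:
            # Second-order effect of e-trust: effort saved grows with trust.
            return base_effort * (1.0 - trust_level)

        tw = trustworthiness(successes=42, trials=50)  # first-order track record
        if tw > 0.8:                                   # illustrative threshold
            print("remaining supervision effort:", delegate_effort(10.0, tw))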

    Research on Factors Affecting Mobile E-Commerce Consumer Trust


    Grid-based multi-sensor fusion for on-road obstacle detection: application to self-driving cars

    Self-driving cars have recently become a challenging research topic, with the aim of making transportation safer and more efficient. Current advanced driving assistance systems (ADAS) allow cars to drive autonomously by following lane markings, identifying road signs and detecting pedestrians and other vehicles. In this thesis work we improve the robustness of autonomous cars by designing an on-road obstacle detection system. The proposed solution consists of the low-level fusion of radar and lidar through the occupancy grid framework. Two inference theories are implemented and evaluated: Bayesian probability theory and Dempster-Shafer theory of evidence. Obstacle detection is performed through image processing of the occupancy grid. Finally, we leverage the additional features of Dempster-Shafer theory by proposing a sensor performance estimation module and performing advanced conflict management. The work has been carried out at Volvo Car Corporation, where real experiments on a test vehicle have been performed under different environmental conditions and with different types of objects. The system has been evaluated according to the quality of the resulting occupancy grids, the detection rate, and the information content in terms of entropy. The results show a significant improvement of the detection rate over single-sensor approaches. Furthermore, the Dempster-Shafer implementation may slightly outperform the Bayesian one when there is conflicting information, although its high computational cost limits its practical application. Finally, we demonstrate that the proposed solution is easily scalable to include additional sensors.
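
    A minimal sketch of the two fusion rules the thesis compares, for a single grid cell; the actual grids, sensor models, and parameters of the thesis are not reproduced, and all numbers below are illustrative. The Bayesian update works in log-odds form, while Dempster's rule combines belief masses over {occupied, free, unknown} and renormalises away the conflict mass.

        import math

        def bayes_update(logodds: float, p_occ: float) -> float:
            # Bayesian occupancy grid update in log-odds form (uniform prior assumed).
            return logodds + math.log(p_occ / (1.0 - p_occ))

        def dempster_combine(m1, m2):
            # Dempster's rule over focal elements {O}, {F}, {O,F} ("OF" = unknown).
            conflict = m1["O"] * m2["F"] + m1["F"] * m2["O"]
            k = 1.0 - conflict  # renormalisation after discarding conflict
            return {
                "O":  (m1["O"] * m2["O"] + m1["O"] * m2["OF"] + m1["OF"] * m2["O"]) / k,
                "F":  (m1["F"] * m2["F"] + m1["F"] * m2["OF"] + m1["OF"] * m2["F"]) / k,
                "OF": (m1["OF"] * m2["OF"]) / k,
            }

        # Radar says probably occupied, lidar is less sure:
        cell = bayes_update(0.0, 0.7)
        cell = bayes_update(cell, 0.6)
        print("log-odds:", round(cell, 3))

        radar = {"O": 0.6, "F": 0.1, "OF": 0.3}
        lidar = {"O": 0.4, "F": 0.3, "OF": 0.3}
        print(dempster_combine(radar, lidar))  # unknown mass tracks remaining ignorance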